Credibility of Observational Social Research (META-REP)
With 15 individual projects and over 50 participating scientists, META-REP investigates fundamental questions about replicability.
Katrin Auspurg is co-applicant and program committee member in the DFG-funded priority program "META-REP: A Meta-scientific Program to Analyze and Optimize Replicability in the Behavioral, Social, and Cognitive Sciences."
META-REP aims to analyze the replicability, robustness, and generalizability of scientific results. It focuses on three central questions: what it means for a finding to replicate successfully, why replication rates vary, and how replicability can be improved.
By investigating these questions, the priority program makes a fundamental contribution to understanding replicability as a central quality criterion of empirical research.
A detailed description of the META-REP priority program can be found on the website of the Department of Psychology (LMU).
As part of the DFG's META-REP priority program, Prof. Auspurg and Dr. Schneck have secured DFG funding for a sub-project on the robustness and sensitivity of results based on non-experimental data in the social sciences. The project aims to develop diagnostic tools, based in particular on computer-assisted statistical approaches such as "multiverse" or "multi-model" analyses, for assessing the robustness of results across different model and sample specifications.
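To make the idea of a multi-model ("multiverse") robustness check concrete, the following is a minimal Python sketch, not the project's actual tooling: it refits a regression for every subset of control variables and records how the focal estimate varies across specifications. All variable names are hypothetical, and the sketch assumes pandas and statsmodels are available.

```python
# Minimal multiverse / multi-model sketch: one OLS fit per subset of controls.
from itertools import combinations

import pandas as pd
import statsmodels.formula.api as smf


def multiverse(df: pd.DataFrame, outcome: str, focal: str, controls: list[str]) -> pd.DataFrame:
    """Fit one OLS model per subset of control variables and collect the
    estimate for the focal predictor across all specifications."""
    rows = []
    for k in range(len(controls) + 1):
        for subset in combinations(controls, k):
            formula = f"{outcome} ~ {focal}"
            if subset:
                formula += " + " + " + ".join(subset)
            fit = smf.ols(formula, data=df).fit()
            rows.append({
                "specification": formula,
                "coef": fit.params[focal],       # focal estimate in this model
                "p_value": fit.pvalues[focal],
                "n_obs": int(fit.nobs),
            })
    return pd.DataFrame(rows).sort_values("coef").reset_index(drop=True)


# Hypothetical usage: how often is the focal estimate positive and "significant"
# across all 2**len(controls) specifications?
# results = multiverse(df, "inst_trust", "education", ["age", "gender", "income"])
# print((results["coef"] > 0).mean(), (results["p_value"] < 0.05).mean())
```

In practice, sample restrictions, operationalizations, and estimators can be varied in the same way; the point of the diagnostic is the distribution of estimates across plausible specifications, not any single model.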
In a second step, these tools will then be used to assess the robustness and thus the credibility of articles published in leading social science journals and/or based on data from the European Social Survey (ESS).
First, we will check whether the results of the articles can be reproduced using the authors' original code. In addition, this inventory will be linked to initial analyses of the risk and incentive structures that make social science more or less credible (drawing on theories from analytical sociology and quantitative science studies). Another project goal is to develop evidence-based proposals for tools that promote robust and credible results in the context of more transparent "open" science (including new publication formats).
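A reproduction check of this kind boils down to comparing the statistics reported in an article with the values obtained by rerunning the original code. The sketch below illustrates one simple way to do that comparison; the tolerance, statistic names, and numbers are hypothetical and only meant to show the logic.

```python
# Minimal sketch of a reproduction check: compare published statistics with
# the values recomputed from the authors' original code.
def reproduction_report(reported: dict[str, float],
                        reproduced: dict[str, float],
                        rel_tol: float = 0.01) -> dict[str, bool]:
    """For each statistic reported in the article, flag whether the rerun value
    matches within a small relative tolerance (allowing for rounding in the
    published tables)."""
    report = {}
    for name, published in reported.items():
        rerun = reproduced.get(name)
        if rerun is None:
            # The statistic could not be recomputed from the original code.
            report[name] = False
        else:
            denom = max(abs(published), 1e-12)
            report[name] = abs(rerun - published) / denom <= rel_tol
    return report


# Hypothetical example: two statistics transcribed from a published table,
# compared against values produced by rerunning the authors' code.
print(reproduction_report({"b_education": 0.12, "se_education": 0.03},
                          {"b_education": 0.121, "se_education": 0.030}))
```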
The project conducts analyses of the reproducibility and robustness of articles that use the same "large-N" observational data (the European Social Survey). The articles differ, however, in discipline, journal, author constellation, and other factors that are likely to be associated with varying reproduction rates.
The sub-project proceeds in four steps:
We aim to conduct a comprehensive audit of potential threats to transparent and credible research. Unlike existing audits, we include publications with both high and low impact. In addition, our audit enables comparisons between disciplines that differ in how far they have implemented the "FAIR principles" (findable, accessible, interoperable, reusable materials). This allows us to identify research areas where measures for improvement appear particularly appropriate and effective. The three closely related, overarching research goals for the second project phase are: